    Governance in the earthquake area and the Energy Port Region Groningen: Public-private partnerships as a panacea for a sustainable future?

    Ever since the causal link between gas production, earthquakes in the Groningen region and the ensuing damage to houses and buildings was recognized, government has faced major challenges in policy-making. On the one hand, liability for damages must result in fast and effective repair of houses and buildings and in safety safeguards for the infrastructure. On the other hand, public trust in governmental institutions in the Groningen earthquake area has to be restored. Following the advice of the Commission ‘Sustainable Future North East Groningen’, a comprehensive package of measures called ‘Trust in restoration, Restoration of trust’ (‘Vertrouwen op herstel, Herstel van vertrouwen’) was announced, in which public-private partnerships were introduced to strengthen the economic perspective of the region, including the establishment of local initiatives on sustainable energy, damage repair and a guaranteed confidential approach by the government. Multiple actors are involved in the execution of this package of measures, since the competence of decision-making lies at State, regional and local level. Together with the emergence of public-private partnerships, this results in a very complex case of multi-level governance and policy-making. The central research question this paper addresses is whether public-private partnerships contribute in a legal and effective manner to policy-making following the package of measures ‘Trust in restoration, Restoration of trust’ in the Energy Port Region Groningen, and whether the chosen form of governance contributes to a sustainable future of the region. This is done by a critical analysis of the Economic Board Groningen, a public-private partnership initiative in the Groningen earthquake area, and its added value in achieving the objectives of the comprehensive package of measures, including the above-mentioned economic perspective in the Energy Port Region Groningen and related local energy initiatives. Analytical notions of multi-level governance are integrated with concepts and discourses found in the literature on modes of governance and policy-making related to public-private partnerships. We demonstrate a gap between effectual approaches to governance and regulatory impediments, and describe policy accumulation in this multi-level governance setting.

    Distributed Join Approaches for W3C-Conform SPARQL Endpoints

    Currently many SPARQL endpoints are freely available and accessible without any cost to users: everyone can submit SPARQL queries to them via a standardized protocol, the queries are processed on the datasets of the endpoints, and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (the intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (local ones and those residing in SPARQL endpoints) need to be combined, distributed joins must be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints that follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and as a combination of the Semi-Join and the Bitvector-Join. Finally, we compare all existing and newly proposed distributed join approaches for W3C-conform SPARQL endpoints in an extensive experimental evaluation.
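
    A minimal sketch of the bitvector-join idea mentioned above (an illustration with assumed sizes and hash function, not the paper's exact algorithm or its new variants): the bindings of the join variable from one side are hashed into a compact bit vector, which the other side uses to discard bindings that cannot join before shipping them over the network.

```python
# Illustrative bitvector join; vector size and hash choice are assumptions for the sketch.
import hashlib

BITS = 1 << 16  # size of the bit vector, chosen arbitrarily for the sketch

def bit_position(value: str) -> int:
    """Map a binding value to a position in the bit vector."""
    digest = hashlib.sha1(value.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % BITS

def build_bitvector(bindings):
    """Build a bit vector over the join-variable values of one side."""
    bv = bytearray(BITS // 8)
    for value in bindings:
        pos = bit_position(value)
        bv[pos // 8] |= 1 << (pos % 8)
    return bv

def filter_with_bitvector(bindings, bv):
    """Keep only bindings whose bit is set (join candidates; false positives possible)."""
    kept = []
    for value in bindings:
        pos = bit_position(value)
        if bv[pos // 8] & (1 << (pos % 8)):
            kept.append(value)
    return kept

# Values of the join variable ?x on the two sides of a distributed join
left = ["http://example.org/a", "http://example.org/b", "http://example.org/c"]
right = ["http://example.org/b", "http://example.org/c", "http://example.org/d"]

bv = build_bitvector(left)
candidates = filter_with_bitvector(right, bv)  # shipped instead of all of `right`
left_values = set(left)
print([v for v in candidates if v in left_values])  # exact join on the receiving side
```

    A semi-join would ship the actual value set instead of the bit vector, trading larger messages for the absence of false positives.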

    A Self-Optimizing Cloud Computing System for Distributed Storage and Processing of Semantic Web Data

    Clouds are dynamic networks of common, off-the-shelf computers that form computation farms. The rapid growth of databases in the context of the Semantic Web requires efficient ways to store and process this data. Using cloud technology for storing and processing Semantic Web data is an obvious way to overcome the difficulties posed by the enormously large present and future datasets of the Semantic Web. This paper presents a new approach for storing Semantic Web data such that operations for the evaluation of Semantic Web queries are more likely to be processed only on local data, instead of using costly distributed operations. An experimental evaluation demonstrates the performance improvements in comparison to a naive distribution of Semantic Web data.
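
    As an illustration of the general idea of placing data so that query evaluation stays local: the sketch below hash-partitions triples by subject (an assumed placement strategy for this sketch, not necessarily the paper's approach), so that all triples about one subject land on the same node and star-shaped joins on that subject need no distributed operation.

```python
# Hash partitioning of RDF triples by subject (assumed strategy for illustration).
import zlib
from collections import defaultdict

NUM_NODES = 4  # assumed cluster size

def node_for(subject: str) -> int:
    """Deterministically assign a subject to one of the nodes."""
    return zlib.crc32(subject.encode("utf-8")) % NUM_NODES

def distribute(triples):
    """Group (s, p, o) triples by the node responsible for their subject."""
    nodes = defaultdict(list)
    for s, p, o in triples:
        nodes[node_for(s)].append((s, p, o))
    return nodes

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob", "foaf:name", '"Bob"'),
]

placement = distribute(triples)
# The star query { ex:alice foaf:knows ?f . ex:alice foaf:name ?n } touches only
# the node that holds all ex:alice triples, so no distributed join is needed.
for node, local_triples in sorted(placement.items()):
    print(node, local_triples)
```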

    Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders

    The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: the basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand, having all 6 collation orders indexed allows fast merge joins (which consume the sorted input of the indices) to be applied as much as possible during query processing. On the other hand, constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence, the focus of this paper is efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices for over 1 billion triples of real-world data.
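
    The six collation orders follow directly from the three triple components. The sketch below builds all six sorted indices (SPO, SOP, PSO, POS, OSP, OPS) for a toy dataset to illustrate why any triple pattern can then be answered by a range scan over sorted data; it does not reflect the paper's parallel, multi-core construction algorithm.

```python
# Building the six collation-order indices of a tiny RDF dataset by sorting
# the same triples under each permutation of (S, P, O).
from itertools import permutations

COMPONENT = {"S": 0, "P": 1, "O": 2}

triples = [
    ("ex:bob", "foaf:knows", "ex:carol"),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
]

indices = {}
for order in permutations("SPO"):                 # exactly 6 collation orders
    name = "".join(order)                         # e.g. "POS"
    indices[name] = sorted(
        triples, key=lambda t, o=order: tuple(t[COMPONENT[c]] for c in o)
    )

# A pattern with only the predicate bound, e.g. (?s, foaf:knows, ?o), becomes a
# prefix range scan on the PSO (or POS) index, whose sorted output feeds merge joins.
for t in indices["PSO"]:
    print(t)
```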

    Social learning in solitary juvenile sharks

    Social learning can be a shortcut for acquiring locally adaptive information. Animals that live in social groups have better access to social information, but gregarious and nonsocial species are also frequently exposed to social cues. Thus, social learning might simply reflect an animal's general ability to learn rather than an adaptation to social living. Here, we investigated social learning and the effect of frequency of social exposure in nonsocial, juvenile Port Jackson sharks, Heterodontus portusjacksoni. We compared (1) Individual Learners, (2) Sham-Observers, paired with a naïve shark, and (3) Observers, paired with a trained demonstrator, in a novel foraging task. We found that more Observers learnt the foraging route compared to Individual Learners or Sham-Observers, and that Individual Learners took more days to learn. Training frequency did not affect learning rate, suggesting acquisition occurred mostly between training bouts. When demonstrators were absent, 30% of Observers maintained their performance above the learning criterion, indicating they retained the acquired information. These results indicate that social living is not a prerequisite for social learning in elasmobranchs and suggest social learning is ubiquitous in vertebrates.

    PatTrieSort - External String Sorting based on Patricia Tries

    External merge sort belongs to the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted there and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to those of traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as patricia tries instead of lists of sorted strings. As we show in this paper, patricia tries can be merged efficiently, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, where our new approach outperforms traditional external merge sort by a factor of 4 when sorting over 4 billion strings of real-world data.
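
    The sketch below illustrates plain external merge sort (run generation under a memory budget, then a k-way merge). For simplicity it stores runs as sorted string lists; the paper's contribution is to build and store the initial runs as patricia tries instead, so that runs hold more strings for the same memory and can be merged faster.

```python
# Minimal external merge sort for strings: bounded-size runs are sorted in memory,
# written to temporary files, and merged with a k-way merge.
import heapq
import os
import tempfile

def write_run(strings, run_dir):
    """Sort one run in memory and write it to a temporary file."""
    fd, path = tempfile.mkstemp(dir=run_dir, suffix=".run")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        for s in sorted(strings):
            f.write(s + "\n")
    return path

def read_run(path):
    """Stream a sorted run back from disk."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

def external_sort(strings, run_size, run_dir):
    # Phase 1: initial run generation under a fixed memory budget (run_size).
    runs, buffer = [], []
    for s in strings:
        buffer.append(s)
        if len(buffer) >= run_size:
            runs.append(write_run(buffer, run_dir))
            buffer = []
    if buffer:
        runs.append(write_run(buffer, run_dir))
    # Phase 2: k-way merge of all initial runs into the final sorted output.
    return heapq.merge(*(read_run(p) for p in runs))

with tempfile.TemporaryDirectory() as d:
    data = ["http://ex.org/c", "http://ex.org/a", "http://ex.org/b", "http://ex.org/aa"]
    print(list(external_sort(data, run_size=2, run_dir=d)))
```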

    Runtime Adaptive Hybrid Query Engine based on FPGAs

    This paper presents a fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is neither feasible nor flexible enough to cover a wide range of queries at system runtime. Therefore, we introduce a runtime-reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which integrates transparently with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via the PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as on the widely known Billion Triples Challenge dataset.
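
    The following host-side control flow is purely illustrative: every function in it is a hypothetical, software-simulated stand-in rather than LUPOSDATE's or the paper's actual interface. It only sketches the per-query cycle of generating an FPGA configuration, loading it, streaming triple data, and collecting results.

```python
# Hypothetical, software-simulated host-side flow (illustrative names only).

def generate_accelerator_config(query: str) -> str:
    """Stand-in for generating a query-specific FPGA configuration."""
    return f"bitstream-for({query})"

def load_configuration(bitstream: str) -> None:
    """Stand-in for runtime reconfiguration of the FPGA."""
    print("loading", bitstream)

def stream_triples_and_collect(triples):
    """Stand-in for shipping triples via PCIe and reading back results;
    the 'accelerator' is simulated here by a trivial software filter."""
    return [t for t in triples if t[1] == "foaf:knows"]

query = "SELECT ?s ?o WHERE { ?s foaf:knows ?o }"
triples = [("ex:alice", "foaf:knows", "ex:bob"),
           ("ex:alice", "foaf:name", '"Alice"')]

load_configuration(generate_accelerator_config(query))
print(stream_triples_and_collect(triples))
```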

    Physical Activity Levels and Domains Assessed by Accelerometry in German Adolescents from GINIplus and LISAplus

    Background: Physical activity (PA) is a well-known and underused protective factor for numerous health outcomes, and interventions are hampered by a lack of objective data. We combined accelerometers with diaries to estimate the contributions to total activity from different domains throughout the day and week in adolescents. Methods: Accelerometric and diary data from 1403 adolescents (45% male, mean age 15.6 +/- 0.5 years) were combined to evaluate daily levels and domains of sedentary, light, and moderate-to-vigorous activity (MVPA) during a typical week. Freedson's cutoff points were applied to determine levels of activity. Total activity was broken down into school physical education (PE), school outside PE, transportation to school, sport, and other time. Results: About 2/3 of adolescents' time was spent sedentary, 1/3 in light activity, and about 5% in MVPA. Boys and girls averaged 46 (SD 22) and 38 (23) minutes of MVPA per day, respectively. Adolescents were most active during leisure sport, spending about 30% of it in MVPA, followed by PE (about 20%), transport to school (14%), and either school class time or other time (3%). PE provided 5% of total MVPA, while leisure sport provided 16% and transportation to school 8%. School was the most sedentary part of the day, with over 75% of time outside PE spent sedentary. Conclusions: These German adolescents were typical of Europeans in showing low levels of physical activity, with significant contributions from leisure sport, transportation and school PE. Leisure sport was the most active part of the day, and participation did not vary significantly by sex, study center (region of Germany) or BMI. Transportation to school was frequent and thus accounted for a significant fraction of total MVPA. This indicates that even in a population with good access to dedicated sporting activities, frequent active transportation can add significantly to total MVPA.
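
    A small sketch of how counts-per-minute epochs can be mapped to activity levels with cutoff points. The thresholds below are the commonly cited adult Freedson (1998) values, used here purely as an assumption for illustration; the study applied Freedson cutoff points appropriate for adolescents, which may differ.

```python
# Classify accelerometer epochs into activity levels using cutoff points.
# Thresholds are the adult Freedson (1998) values, assumed for illustration only.
def classify_epoch(counts_per_minute: int) -> str:
    if counts_per_minute < 100:
        return "sedentary"
    if counts_per_minute < 1952:
        return "light"
    return "MVPA"  # moderate-to-vigorous physical activity

def daily_minutes_by_level(epochs):
    """Sum 1-minute epochs per activity level for one day of wear time."""
    minutes = {"sedentary": 0, "light": 0, "MVPA": 0}
    for cpm in epochs:
        minutes[classify_epoch(cpm)] += 1
    return minutes

day = [50, 80, 300, 1200, 2500, 4000, 90, 150]  # toy counts-per-minute series
print(daily_minutes_by_level(day))
```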

    P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud

    Increasingly, data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on need. In this work we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing the Bloom filters during query processing, a time-consuming task as it has traditionally been done, we precompute them as far as possible and store them in the indices alongside the data. The experimental results with datasets of up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.
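
    A minimal Bloom filter sketch showing the underlying mechanism of dropping intermediate results early; it does not reproduce P-LUPOSDATE's precomputed, index-resident filters, only the idea of building a filter over one join side ahead of time and probing it at query time.

```python
# Minimal Bloom filter; sizes and hash scheme are assumptions for the sketch.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 16, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, value: str):
        """Derive num_hashes bit positions for a value."""
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{value}".encode("utf-8")).digest()
            yield int.from_bytes(digest[:4], "big") % self.num_bits

    def add(self, value: str):
        for pos in self._positions(value):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, value: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(value))

# A filter built ahead of time over the values of one join side ...
bf = BloomFilter()
for v in ["ex:alice", "ex:bob"]:
    bf.add(v)

# ... is probed at query time to drop bindings that cannot join.
candidates = ["ex:alice", "ex:carol", "ex:bob", "ex:dave"]
print([v for v in candidates if bf.might_contain(v)])  # false positives possible
```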